
Record: 4096-Vocab + 4.0-MLP-mult + 0.085-WD + Simplifications — val_bpb 1.09785 (3-seed mean) #1218

Open
clarkkev wants to merge 1 commit into openai:main from clarkkev:submission/vocab4096-mlpmult4-wd085

Conversation

@clarkkev clarkkev commented Apr 1, 2026

Record: 4096-Vocab + Larger Model + High WD + Simplifications — val_bpb 1.09785

val bpb: 1.09785 (3-seed mean, std=0.0004)

| Seed | Steps | Pre-quant BPB | Post-quant BPB | Sliding BPB | Artifact (bytes) |
|------|-------|---------------|----------------|-------------|------------------|
| 42   | 5967  | 1.10411       | 1.11588        | 1.09744     | 15,915,268       |
| 1337 | 5962  | 1.10482       | 1.11631        | 1.09795     | 15,905,460       |
| 2025 | 5961  | 1.10507       | 1.11641        | 1.09816     | 15,927,782       |
| Mean |       | 1.10467       | 1.11620        | 1.09785     | 15,916,170       |

Overview

This script builds on the 03-23 leaderboard record. The main changes are:

Fixes

  • Fixed a small bug in the sliding-window evaluation that caused tokens at the end of the val dataset to be scored multiple times. This bug didn't significantly affect results: it added roughly 2k duplicate contributions to the total loss and byte counts over a validation set of about 6M tokens. The faulty line was
    `window_starts = [ws for ws in range(0, total_tokens, stride) if min(ws + seq_len, total_tokens) - ws >= 1]`
    and it should be
    `window_starts = [ws for ws in range(0, total_tokens, stride) if ws + seq_len - stride < total_tokens]`
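As a sanity check, the two conditions can be compared on a toy example. This assumes each window after the first scores only its final `stride` tokens, which is a guess at the eval loop's behavior rather than the script's actual code:

```python
# Toy comparison of the two window-start conditions. Assumption: the first
# window scores all of its tokens, and each later window scores only its
# final `stride` tokens (clipped to the dataset end).
from collections import Counter

def scored_counts(window_starts, total_tokens, seq_len, stride):
    counts = Counter()
    for i, ws in enumerate(window_starts):
        end = min(ws + seq_len, total_tokens)
        start = 0 if i == 0 else end - stride  # later windows score the tail
        for t in range(start, end):
            counts[t] += 1
    return counts

total_tokens, seq_len, stride = 20, 8, 2

buggy = [ws for ws in range(0, total_tokens, stride)
         if min(ws + seq_len, total_tokens) - ws >= 1]
fixed = [ws for ws in range(0, total_tokens, stride)
         if ws + seq_len - stride < total_tokens]

buggy_counts = scored_counts(buggy, total_tokens, seq_len, stride)
fixed_counts = scored_counts(fixed, total_tokens, seq_len, stride)

print(max(buggy_counts.values()))  # 4: the last two tokens get re-scored
print(max(fixed_counts.values()))  # 1: every token scored exactly once
```

With the buggy condition, windows starting at 14, 16, and 18 all truncate to the same tail and re-score tokens 18–19; the fixed condition drops them.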

Simplifications

  • Used XSA in all layers instead of only the last 4.
  • Removed parameter banking and the distributed Muon implementation; instead just used Muon + DDP.
  • Removed test time training. I doubt that 0.1% additional tokens will improve the model generally, and for long docs I think it makes more sense to work on extending the sequence length.
  • Removed quantization-aware training, since it appeared to provide little or no benefit.
  • Removed gated attention.
  • Removed value residuals.
  • Removed hash embeddings, which are probably less necessary after increasing the vocab size.
  • Removed the smear gate, for the same reason.

Additions

  • Increased the vocabulary size from 1024 to 4096. I used the existing `data/download_hf_docs_and_tokenize.py` to build the SentencePiece tokenizer and pre-tokenized data. The tokenizer model grew by ~50 KB, but even with that added, the final artifacts stay below the 16 MB cap. A larger vocab means the model sees more context for the same sequence length and more training data per step.
  • Used a bigger but more strongly regularized model. I discovered that the compression ratio of a weight matrix (i.e., quantized-and-compressed MB / raw MB) correlates extremely well with the matrix's root mean square (`torch.sqrt(torch.mean(x**2))`), with an R² near 0.99. This suggests that weight decay is a good lever for reducing the compressed size, which lets us add more parameters to the model. In particular, this script uses:
    • Higher weight decays: Muon weight decay increased 0.04 -> 0.085, and added an embedding weight decay of 0.085. Additionally, decreased the Adam weight decay 0.04 -> 0.02, as scalar parameters shouldn't need to be low-magnitude.
    • Wider MLPs, increasing mlp_mult 3 -> 4.
    • A decreased learning rate 0.025 -> 0.02, as larger models generally benefit from smaller LRs.
  • Added the coprime-stride data loader from #726. The benefit is that it avoids showing the model sequences from the same document in the same/nearby minibatches by jumping around the data files.
  • Added GPTQ Hessian-aware quantization. My implementation is based on #1060 and reserves some time from training for Hessian computation.
  • Used the more efficient byte-shuffle + Brotli compression from #1089.
  • Added sigmoid-gated skip connections to the unet, also from #1089.
  • Increased qk_gain_init 1.5 -> 4 following #1125.
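The coprime-stride loader idea can be sketched in a few lines. This is an assumption-level reconstruction of the approach described in #726, not its actual code: stepping through sequence indices with a stride coprime to the sequence count yields a full permutation, so adjacent minibatches draw from far-apart positions in the data files.

```python
# Sketch of a coprime-stride ordering (illustrative reconstruction, not the
# real loader): because gcd(stride, num_sequences) == 1, the walk
# (i * stride) % num_sequences visits every index exactly once, keeping
# sequences from the same document out of the same/nearby minibatches.
import math

def coprime_stride_order(num_sequences: int, stride: int) -> list[int]:
    if math.gcd(num_sequences, stride) != 1:
        raise ValueError("stride must be coprime with num_sequences")
    return [(i * stride) % num_sequences for i in range(num_sequences)]

order = coprime_stride_order(10, 7)
print(order)  # [0, 7, 4, 1, 8, 5, 2, 9, 6, 3]
```

Note that consecutive entries are 7 positions apart, so neighboring indices (likely the same document) never land in adjacent slots.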

@clarkkev clarkkev changed the title 4096-Vocab + 4.0-MLP-mult + 0.085-WD + Simplifications — val_bpb 1.09785 (3-seed mean) Record: 4096-Vocab + 4.0-MLP-mult + 0.085-WD + Simplifications — val_bpb 1.09785 (3-seed mean) Apr 1, 2026
@mikeapedia

Awesome results @clarkkev!

icryo added a commit to icryo/parameter-golf that referenced this pull request Apr 2, 2026
Strip complexity, bigger model, higher weight decay:
  MLP 3x → 4x (32.2M params vs 27M)
  MUON_WD 0.04 → 0.085 (better int6 compression)
  ADAM_WD 0.04 → 0.02 (scalars)
  BigramHash removed, VE removed
  QK_GAIN_INIT=4.0

PR openai#1218 proved this approach works: simplify + regularize = 1.098 on sp4096.
On Scylla (998 tokens): should fit ~15.9MB at high WD.
@abaybektursun
Contributor

abaybektursun commented Apr 2, 2026

so elegant, thing of beauty my friend.
How did you discover that the compression ratio correlates with the matrix's root-mean-square?
In general what did you test and what experiments did you conduct before this?
Because there were a lot of great decisions made and it's not obvious how you made them. Is it just established expertise? Would you mind sharing? Thanks!

@NoesisGenesis

> @abaybektursun: so elegant, thing of beauty my friend. How did you discover that the compression ratio correlates with the matrix's root-mean-square?

I am not the author of this PR, but I have spent some time on the same quantization-and-compression pipeline to have views on the moving parts.

Mostly I think the discovery itself is the boring part, and the mechanism is the interesting part.

The boring part is: you just instrument the exact export path, matrix by matrix. For each tensor, log raw MB, quantized-and-compressed MB, and a few simple statistics on the float weights. Then scatter `compressed_mb / raw_mb` against those statistics. RMS jumps out very quickly.
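A toy version of that instrumentation loop might look like the following. Two stand-ins are assumptions on my part: `zlib` in place of the byte-shuffle + Brotli stage, and a fixed-step int6 grid in place of the real rowwise quantizer; the point is only the shape of the logging loop.

```python
# Per-"tensor" instrumentation sketch: log raw bytes, quantized-and-compressed
# bytes, and RMS, then inspect the relationship. zlib stands in for
# byte-shuffle + Brotli, and a fixed 0.01 grid clipped to int6 stands in for
# the real exporter; both substitutions are simplifying assumptions.
import random, zlib, math

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

def quantize_int6(xs, step=0.01):
    # round to a fixed grid and clip to the int6 range [-31, 31]
    return bytes((max(-31, min(31, round(x / step))) & 0xFF) for x in xs)

rng = random.Random(0)
rows = []
for sigma in (0.005, 0.01, 0.02, 0.04):
    xs = [rng.gauss(0.0, sigma) for _ in range(4096)]
    raw_bytes = 4 * len(xs)  # fp32 storage
    packed = len(zlib.compress(quantize_int6(xs)))
    rows.append((rms(xs), packed / raw_bytes))

for r, ratio in rows:
    print(f"rms={r:.4f}  compressed/raw={ratio:.3f}")
```

Even in this crude setup, lower-RMS tensors concentrate their codes near zero and compress into fewer bytes, which is the first-order effect being described.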

The reason it jumps out is that, in this pipeline, RMS sits upstream of almost everything the exporter cares about. GPTQ is using rowwise scales, so if a matrix has lower RMS, its rows usually get smaller scales and therefore a finer effective grid. That pushes more coefficients into small quantized values, especially 0, ±1, ±2. Then selective pruning likes exactly those values, because they are the cheapest ones to zero. Then byte shuffle + Brotli likes the resulting stream, because the symbol histogram is more peaked and the scale bytes repeat more. So lower RMS changes the quantized representation into the exact kind of byte stream that the whole stack prefers.

I suspect the reason the R² got so absurdly high is that, in that regime, the matrices were fairly self-similar apart from scale. If the row max/RMS ratio and general histogram shape do not vary too much, then one scalar, RMS, ends up predicting most of the row-scale distribution seen by the quantizer, and from there a lot of the final compressed size is basically determined.

I would not overstate it as a universal law, though. It is a very strong empirical regularity for this specific setup: rowwise low-bit quantization, pruning of small codes, byte shuffle, then Brotli. It can break if two matrices have the same RMS but different outlier structure, different max/RMS, different scale distributions, different bit assignments, or if one tensor bypasses the low-bit path. In short, RMS is an excellent first-order proxy for the rate term of the deployed artifact.

The deeper point, to me, is that this is really a rate-distortion problem. RMS tells you a lot about the rate side, meaning how many bytes the matrix will want after quantization and compression. It does not tell you the whole distortion side, meaning how much loss you incur if you make that matrix smaller. I have seen cases where a matrix looks benign by raw MSE but is catastrophic by Hessian-weighted error.

What I would recommend is to use weight decay as a crude shadow price on bytes, then spend the saved bytes on the matrices whose Hessian-weighted quantization damage is worst.
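That "spend the saved bytes where the damage is worst" step can be sketched as a greedy bitwidth-upgrade loop. Everything here is hypothetical for illustration: the matrix names, the byte costs, and the Hessian-weighted error numbers are made up, and a real exporter would measure them.

```python
# Hypothetical sketch of the "shadow price on bytes" idea: start every matrix
# at its cheapest bitwidth, then repeatedly buy the upgrade with the best
# Hessian-weighted damage reduction per extra byte until the budget is spent.
# All names and numbers below are illustrative, not from the actual PR.

def allocate_bits(matrices, budget_bytes):
    # matrices: name -> list of (bits, bytes, hessian_weighted_err), ascending bits
    choice = {name: 0 for name in matrices}           # index into each options list
    spent = sum(opts[0][1] for opts in matrices.values())
    while True:
        best = None
        for name, opts in matrices.items():
            i = choice[name]
            if i + 1 >= len(opts):
                continue                              # already at max bits
            extra = opts[i + 1][1] - opts[i][1]       # marginal bytes
            gain = opts[i][2] - opts[i + 1][2]        # marginal damage reduction
            if spent + extra <= budget_bytes and (best is None or gain / extra > best[0]):
                best = (gain / extra, name, extra)
        if best is None:
            return {name: matrices[name][i][0] for name, i in choice.items()}
        _, name, extra = best
        choice[name] += 1
        spent += extra

matrices = {
    "attn.qkv": [(4, 100, 9.0), (5, 125, 3.0), (6, 150, 2.5)],
    "mlp.up":   [(4, 200, 4.0), (5, 250, 2.0), (6, 300, 1.9)],
}
print(allocate_bits(matrices, budget_bytes=425))  # {'attn.qkv': 6, 'mlp.up': 5}
```

Weight decay then acts upstream of this loop: it shrinks the byte cost of each option, freeing budget that the greedy step reallocates to the most quantization-sensitive matrices.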

> Because there were a lot of great decisions made and it's not obvious how you made them.

The simplifications make sense in the local logic of that PR, though not all for the same reason. Once you have the compression lever via RMS and a 4096 vocab, removing lexical auxiliaries like hash embeddings and the smear gate becomes much easier to justify, because some of what they were buying is now being bought more directly by the tokenizer and by the larger core model. Likewise, if stronger regularization is making the main weight banks cheaper to quantize and compress, then spending the recovered budget on a wider MLP is a very coherent move.

At the same time, I do not think all of those removals should be read as settled truths. Some of them, especially the lexical extras, follow pretty naturally from the larger vocab; others feel more like strong empirical bets that also hand-wave away interactions that I suspect matter, particularly around matrix-specific quantization sensitivity and export behavior. My own not-yet-submitted SOTA PR has led me to somewhat different conclusions on a few of these choices. The space of good recipes is larger than any single record PR might suggest.

My own framing I keep coming back to is that Parameter Golf is not about training a model and then compressing it, although so far most PRs read like that is exactly what it is. It is about learning an equivalence class of functions, then choosing the member of that class whose quantized, side-informed, bank-packed serialization has the lowest task loss at 16MB.

Or, if you prefer it in fewer syllables: the true parameters are the bits.

Train weights that the quantizer will thank you for. This submission is what it looks like when someone starts pulling on that thread. There is a lot more to find in this direction. Go further: shape the training dynamics from the ground up so the learned solution already lives in a compression-friendly, distortion-stable basin. You can push a surprisingly large model through the 16 MB bottleneck and have it come out the other side intact.

dexhunter added a commit to dexhunter/parameter-golf that referenced this pull request Apr 2, 2026
…1.0929 (3-seed mean)

Adds three techniques to PR openai#1218's 4096-vocab high-WD stack:
- MuonEq-R optimizer (row-norm before NS5 orthogonalization)
- Depth recurrence on layers 4,5 (shared MLP, zero extra params)
- Mixed int5/int6 GPTQ via Hessian sensitivity ranking

3-seed mean: 1.0929 BPB / 2.5145 nats
All seeds under 16MB (max: 15,981,324 bytes)
No TTT, no SLOT, no eval-time adaptation.
